
    DATA REPLICATION IN DISTRIBUTED SYSTEMS USING OLYMPIAD OPTIMIZATION ALGORITHM

    Achieving timely access to data objects is a major challenge in large distributed systems such as Internet of Things (IoT) platforms. Minimizing data read and write times has therefore become a high priority for system designers. Replication, together with the placement of replicas on the most accessible data servers, is an NP-complete optimization problem. The key objectives of this study are to minimize data access time, reduce the number of replicas, and improve data availability. The paper employs the Olympiad Optimization Algorithm (OOA), a novel population-based, discrete heuristic, to solve the replica placement problem; the algorithm is also applicable to other fields, such as mechanical and computer engineering design problems. The algorithm was inspired by the learning process of student groups preparing for Olympiad exams. The proposed algorithm, which is based on divide-and-conquer with local and global search strategies, was used to solve the replica placement problem in a standard simulated distributed system. The European Union Database (EUData), which contains 28 server nodes connected in a complete-graph topology, was used for evaluation. The proposed technique reduces data access time by 39% with around six replicas, substantially outperforming earlier methods. Moreover, the standard deviation across different executions of the algorithm is approximately 0.0062, lower than that of the other techniques in the same experiments.
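    The placement objective above can be illustrated with a simple baseline: given the pairwise distances of a complete graph, greedily choose replica hosts that minimize the average access cost. This is a hedged sketch of the objective only, not the OOA itself; the graph size and distances are invented.

```python
import random

def access_cost(nodes, replicas, dist):
    # Average cost for each node to reach its nearest replica.
    return sum(min(dist[u][r] for r in replicas) for u in nodes) / len(nodes)

def greedy_placement(n_nodes, k, dist):
    # Baseline greedy heuristic: repeatedly add the host that most
    # reduces the average access cost (not the OOA from the paper).
    nodes = list(range(n_nodes))
    replicas = []
    for _ in range(k):
        candidates = [c for c in nodes if c not in replicas]
        best = min(candidates,
                   key=lambda c: access_cost(nodes, replicas + [c], dist))
        replicas.append(best)
    return replicas

# Toy complete graph with symmetric random distances.
random.seed(0)
n = 8
dist = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        dist[i][j] = dist[j][i] = random.randint(1, 10)

placement = greedy_placement(n, 3, dist)
```

    Adding a replica can never increase any node's distance to its nearest copy, so each greedy step is monotone; the OOA searches this same placement space with population-based local and global moves instead.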

    A mining-based approach for selecting best resources nodes on grid resource broker

    Nowadays, Grid Computing is accepted as an infrastructure for parallel computing over distributed computational resources (Karl et al., 2001). A grid has users, resources, and an information service (IS). Grid computing is a technology for creating distributed infrastructures and virtual organizations (VOs) for very large-scale computing and enterprise applications. In a grid environment, the computational resource is the main part of the system and can be a desktop PC, a cluster machine, or a supercomputer. A main goal of grid computing is to enable applications to identify resources dynamically and create distributed computing environments that can utilize computing resources on demand (Karl et al., 2001).

    Localized job scheduling system using cooperative and system-centric scheduling policy for market-oriented grids

    In grid scheduling systems, a major challenge is to manage consumers' jobs according to their Quality of Service (QoS) requirements and the provider nodes' satisfaction. Most capable job scheduling policies operate on the basis of meta-scheduling systems, which can result in complex management and a likelihood of overcrowding in market-oriented grids. An efficient job scheduling algorithm based on a local scheduling policy is therefore vital to reduce overcrowding in the meta-scheduling system. This paper presents an efficient scheduling approach based on localizing the job scheduling policy. The approach matches jobs to appropriate computing nodes by taking into account the nodes' throughput and the jobs' QoS requirements. The job management policy focuses especially on increasing the job submission rate through an accurate estimation procedure and on completing submitted jobs within their defined deadlines. Experiments were designed to study the performance of the proposed approach, and its performance was compared with other popular algorithms and policies using general, standard metrics. Results show an increase in performance and user satisfaction. Additionally, the overdue time of jobs, a main concern in market-oriented grid computing systems, was significantly improved.
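    A minimal sketch of the node-matching idea: estimate each node's completion time from its queue wait and throughput, keep only the nodes that meet the job's deadline, and pick the fastest among them. The field names and figures below are illustrative assumptions, not the paper's model.

```python
def pick_node(job_len, deadline, nodes):
    # Keep nodes whose estimated completion time (queue wait plus
    # job_len / throughput) meets the deadline; field names are invented.
    feasible = [n for n in nodes
                if n["queue_wait"] + job_len / n["throughput"] <= deadline]
    if not feasible:
        return None  # reject the job rather than miss its deadline
    return max(feasible, key=lambda n: n["throughput"])

nodes = [
    {"name": "A", "throughput": 4.0, "queue_wait": 10.0},
    {"name": "B", "throughput": 2.0, "queue_wait": 1.0},
]
# For a 100-unit job with a 40-unit deadline, A finishes at 35, B at 51.
chosen = pick_node(100, 40, nodes)
```

    Rejecting infeasible jobs up front, rather than queuing them optimistically, is what keeps the overdue time low in a deadline-driven policy of this kind.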

    An Optimized K-Harmonic Means Algorithm Combined with Modified Particle Swarm Optimization and Cuckoo Search Algorithm

    Among data clustering algorithms, the k-means (KM) algorithm is one of the most popular clustering techniques due to its simplicity and efficiency. However, k-means is sensitive to the initial centers and suffers from the local-optima problem. The k-harmonic means (KHM) clustering algorithm solves the initialization problem of k-means, but it still has the local-optima problem. In this paper, we develop a new algorithm to address this problem based on an improved version of particle swarm optimization (IPSO) combined with KHM clustering. In the proposed algorithm, IPSO is equipped with the Cuckoo Search algorithm and two new concepts for PSO in order to improve efficiency, speed up convergence, and escape local optima: IPSO updates particle positions using a combination of the global worst and global best together with the personal worst and personal best, applied dynamically in each iteration. Experimental results on five real-world datasets and two artificial datasets confirm that this improved version is superior to k-harmonic means and regular PSO. The simulations show that the new algorithm produces promising solutions with fast convergence, high accuracy, and correctness while markedly improving processing time.
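    The described position update can be sketched as follows: velocities attract particles toward the personal and global bests while repelling them from the personal and global worsts. The coefficients and the 1-D objective are illustrative assumptions; the paper's tuned variant and its Cuckoo Search component are not reproduced here.

```python
import random

def ipso_minimize(f, n_particles=10, iters=100, lo=-10.0, hi=10.0,
                  w=0.6, c1=1.4, c2=1.4, c3=0.4, c4=0.4):
    # 1-D sketch: attraction toward personal/global bests, repulsion
    # from personal/global worsts. Coefficients are illustrative guesses.
    random.seed(42)
    xs = [random.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest, pworst = xs[:], xs[:]
    gbest = min(xs, key=f)
    gworst = max(xs, key=f)
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2, r3, r4 = (random.random() for _ in range(4))
            vs[i] = (w * vs[i]
                     + c1 * r1 * (pbest[i] - xs[i])
                     + c2 * r2 * (gbest - xs[i])
                     - c3 * r3 * (pworst[i] - xs[i])
                     - c4 * r4 * (gworst - xs[i]))
            xs[i] += vs[i]
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
            if f(xs[i]) > f(pworst[i]):
                pworst[i] = xs[i]
        gbest = min(pbest, key=f)
        gworst = max(pworst, key=f)
    return gbest

best = ipso_minimize(lambda x: x * x)
```

    Because the global best is the best point ever visited, it can only improve on the initial swarm; the repulsion terms mainly help the swarm leave regions near known-bad solutions.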

    A predictive approach to improve a fault tolerance confidence level on grid resources scheduling

    The grid is becoming an attractive and encouraging platform for solving large-scale compute-intensive problems. In this environment, various geographically distributed resources are logically coupled together and presented as a single integrated resource. Since resources on the grid are dynamic and failure-prone, one of the most important problems in a grid environment is designing fault-tolerant resource scheduling, and finding a stable, fault-tolerant resource requires a predictive method. Many methods have been presented in recent years, but they do not fully consider parameters such as job requirements, lack a clear prediction method, or take an optimistic view of the grid scheduling cycle. Moreover, since many methods rely on GIS information to learn about resources, they cannot reliably select optimal nodes, because the GIS does not cover all information about grid resources. To address these shortcomings, this paper presents a new fault tolerance mechanism for resource scheduling on the grid that uses the Case-Based Reasoning (CBR) technique in a local fashion. The approach applies a specific structure to provide fault tolerance between executor nodes, keeping the system in a safe state with minimal data transfer. This algorithm increases fault tolerance confidence and thereby improves grid performance.
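    The case-based reasoning step can be sketched as nearest-neighbour retrieval over past executions: the outcome of the most similar stored cases predicts whether a candidate node is likely to fail. The features, scaling, and case base below are invented for illustration; the paper's case structure is not reproduced.

```python
import math

# Invented case base: node state at submission time plus the observed outcome.
cases = [
    {"cpu_load": 0.9, "mem_free": 0.1, "uptime_h": 2, "failed": True},
    {"cpu_load": 0.3, "mem_free": 0.7, "uptime_h": 120, "failed": False},
    {"cpu_load": 0.5, "mem_free": 0.5, "uptime_h": 48, "failed": False},
    {"cpu_load": 0.8, "mem_free": 0.2, "uptime_h": 5, "failed": True},
]

def distance(a, b):
    # Euclidean distance over roughly normalized features
    # (uptime scaled by one week = 168 hours).
    return math.sqrt((a["cpu_load"] - b["cpu_load"]) ** 2
                     + (a["mem_free"] - b["mem_free"]) ** 2
                     + ((a["uptime_h"] - b["uptime_h"]) / 168) ** 2)

def predict_failure(node, k=3):
    # CBR retrieve-and-reuse: take the k most similar past cases and
    # reuse their majority outcome as the failure prediction.
    nearest = sorted(cases, key=lambda c: distance(node, c))[:k]
    return sum(c["failed"] for c in nearest) > k / 2

risky = predict_failure({"cpu_load": 0.85, "mem_free": 0.15, "uptime_h": 3})
safe = predict_failure({"cpu_load": 0.2, "mem_free": 0.8, "uptime_h": 100})
```

    Keeping such a case base locally at each scheduler, rather than in a global information service, is what lets the prediction work even where the GIS has incomplete information.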

    A new approach for selecting best resources nodes by using fuzzy decision tree in grid resource broker

    Nowadays, Grid Computing is accepted as an infrastructure for parallel computing over distributed computational resources. A grid has users, resources, and an information service (IS). The resource broker is one of the main grid services; it finds, filters, and allocates resources. Resource selection, a part of the resource broker, is an important issue in a grid environment where consumers and service providers are geographically distributed. In this paper, we design and implement a new data-mining-based grid resource broker service for selecting resources in a grid environment. The role of this service is to use a learning method to find the best nodes for a job's requirements among the distributed computing resources on the grid. The application can be executed on top of the Globus Toolkit (GT) middleware. Experimental results show a strong improvement in the resource-finding cycle.
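    A fuzzy scoring rule of the kind such a learned selector might apply can be sketched with triangular membership functions: a node is suitable when its load is "low" and its reliability is "high", combined with a fuzzy AND. The breakpoints and fields are illustrative assumptions, not the trained model from the paper.

```python
def tri(x, a, b, c):
    # Triangular membership function peaking at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def suitability(node):
    # Fuzzy AND (min) of "load is low" and "reliability is high";
    # breakpoints are invented for illustration.
    low_load = tri(node["load"], -0.01, 0.0, 0.6)
    high_rel = tri(node["reliability"], 0.4, 1.0, 1.01)
    return min(low_load, high_rel)

nodes = [{"name": "n1", "load": 0.2, "reliability": 0.9},
         {"name": "n2", "load": 0.5, "reliability": 0.95}]
best = max(nodes, key=suitability)
```

    Graded scores like these let the broker rank nodes smoothly instead of applying hard thresholds, which is the usual motivation for fuzzy selection rules.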

    The Necessity of Using Cloud Computing in Educational System

    Cloud computing is a dynamically scalable system that provides internet-based services, often virtually. With the emergence of electronic systems and the move away from paper, virtual and electronic technologies are becoming important. This paper discusses the importance of online training and emphasizes its qualitative and quantitative development for organizations and for technical science and engineering students. It concentrates on delivering online education based on cloud computing environments and discusses the necessity of cloud-based educational systems for organizations and countries. Based on the experience of universities, companies, and other organizations, the challenges and issues of deploying online education are considered in order to avoid pitfalls.

    An optimized clustering algorithm using genetic algorithm and rough set theory based on kohonen self organizing map

    The Kohonen self-organizing map (SOM) is an efficient tool in the exploratory phase of data mining and pattern recognition. The SOM is a popular method that maps a high-dimensional space into a small number of dimensions by placing similar elements close together, forming clusters. Recently, many researchers have found that crisp boundaries are not necessary in some clustering operations if the uncertainty involved in cluster analysis is to be captured. In this paper, an optimized two-level clustering algorithm based on the SOM, employing rough set theory and a genetic algorithm, is proposed to overcome the uncertainty problem. Evaluation of the proposed algorithm on our gathered poultry disease data and on the Iris dataset shows higher accuracy and fewer errors than crisp clustering methods.
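    The rough-set idea at the second level can be sketched with lower and upper approximations: a point clearly closest to one prototype joins that cluster's lower approximation, while a point nearly equidistant between two prototypes joins both upper approximations (the boundary region). The 1-D data, prototypes, and threshold below are invented; the SOM and genetic-algorithm stages are omitted.

```python
def rough_assign(points, centroids, ratio=1.3):
    # Rough clustering assignment: if the second-closest centroid is
    # within `ratio` of the closest, the point is a boundary point and
    # joins both upper approximations; otherwise it joins exactly one
    # lower approximation. The threshold is an illustrative choice.
    lower = {i: [] for i in range(len(centroids))}
    upper = {i: [] for i in range(len(centroids))}
    for p in points:
        order = sorted(range(len(centroids)),
                       key=lambda i: abs(p - centroids[i]))
        nearest, second = order[0], order[1]
        upper[nearest].append(p)
        if abs(p - centroids[second]) <= ratio * abs(p - centroids[nearest]):
            upper[second].append(p)   # uncertain membership (boundary)
        else:
            lower[nearest].append(p)  # unambiguous membership
    return lower, upper

lower, upper = rough_assign([0.0, 1.0, 4.9, 5.1, 10.0], [0.0, 10.0])
```

    Points near 5.0 land in both upper approximations rather than being forced into one cluster, which is precisely the uncertainty that crisp boundaries discard.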